The Relationship between the Stochastic Maximum Principle and the Dynamic Programming in Singular Control of Jump Diffusions
Authors
Abstract
The main objective of this paper is to explore the relationship between the stochastic maximum principle (SMP for short) and the dynamic programming principle (DPP for short) for singular control problems of jump diffusions. First, we establish necessary as well as sufficient conditions for optimality by using the stochastic calculus of jump diffusions and some properties of singular controls. Then, under smoothness conditions, we give a useful verification theorem and show that the solution of the adjoint equation coincides with the spatial gradient of the value function, evaluated along the optimal trajectory of the state equation. Finally, using these theoretical results, we explicitly solve an example on an optimal harvesting strategy for a geometric Brownian motion with jumps.
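As an illustrative sketch of this relationship (the notation below is assumed for illustration and is not quoted from the paper), consider a controlled jump diffusion driven by a Brownian motion B and a compensated Poisson random measure \tilde N, with a nondecreasing singular control \xi:

\[
dX(t) = b(X(t))\,dt + \sigma(X(t))\,dB(t) + \int_{\mathbb{R}} \gamma(X(t^-),z)\,\tilde N(dt,dz) - d\xi(t), \qquad X(0) = x .
\]

If the value function V of the singular control problem is sufficiently smooth, the connection established between the SMP and the DPP takes the form

\[
p(t) = V_x\bigl(t, X^{*}(t)\bigr), \qquad t \in [0,T],
\]

where p is the first adjoint process and X^{*} is the optimal state trajectory; that is, the solution of the adjoint equation is the spatial gradient of the value function along the optimal path.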
Similar works
Singular stochastic control and optimal stopping with partial information of jump diffusions
We study partial information, possibly non-Markovian, singular stochastic control of jump diffusions and obtain general maximum principles. The results are used to find connections between singular stochastic control, reflected BSDEs and optimal stopping in the partial information case. Mathematics Subject Classification 2010: 93E20, 60H07, 60H10, 60HXX, 60J75
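To fix ideas, a reflected backward SDE of the kind these connections typically involve can be written as follows (a generic formulation, with notation assumed here rather than taken from the paper): the triple (Y, Z, K) solves

\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s, \qquad 0 \le t \le T,
\]

subject to the obstacle and minimality (Skorokhod) conditions

\[
Y_t \ge L_t, \qquad \int_0^T \bigl(Y_t - L_t\bigr)\,dK_t = 0,
\]

with K nondecreasing and K_0 = 0. The component Y then coincides with the value of an optimal stopping problem with obstacle process L, which is the kind of link between singular control, reflected BSDEs and optimal stopping referred to above.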
Control Problem and its Application in Management and Economic
Control problems and dynamic programming are powerful tools in economics and management. We review the dynamic programming problem from its beginning up to its present stages. A problem which arose in physics and mathematics in the 17th century led to a branch of mathematics called the calculus of variations, which was used in economics and management at the end of the first quarter of the 20th ...
A Multi-Stage Single-Machine Replacement Strategy Using Stochastic Dynamic Programming
In this paper, the single-machine replacement problem is modeled within the frameworks of stochastic dynamic programming and control threshold policies, and some properties of the optimal values of the control thresholds are derived. Using these properties and by minimizing a cost function, the optimal values of two control thresholds for the time between productions of two successive nonco...
Stochastic Dynamic Programming with Markov Chains for Optimal Sustainable Control of the Forest Sector with Continuous Cover Forestry
We present a stochastic dynamic programming approach with Markov chains for optimal control of the forest sector. The forest is managed via continuous cover forestry and the complete system is sustainable. Forest industry production, logistic solutions and harvest levels are optimized based on the sequentially revealed states of the markets. Adaptive full system optimization is necessary for co...
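The core recursion of such a stochastic dynamic programming model over a Markov chain is the discounted Bellman equation (a generic sketch; the state, action and reward notation is assumed, not taken from the paper):

\[
V(s) = \max_{a \in A(s)} \Bigl\{ r(s,a) + \beta \sum_{s'} P(s' \mid s, a)\, V(s') \Bigr\},
\]

where s is the sequentially revealed market and forest state, a a feasible harvest and production decision, P the Markov transition probabilities and \beta the discount factor; adaptive full-system optimization amounts to re-solving this recursion as the state of the markets is revealed.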
Necessary Conditions of Optimization for Partially Observed Controlled Diffusions
Necessary conditions are derived for stochastic partially observed control problems when the control enters the drift coefficient and correlation between signal and observation noise is allowed. The problem is formulated as one of complete information, but instead of considering directly the equation satisfied by the unnormalized conditional density of nonlinear filtering, measure-valued decomp...